

Decentralized Online Convex Optimization with Unknown Feedback Delays

Qiu, Hao, Zhang, Mengxiao, Achddou, Juliette

arXiv.org Machine Learning

Decentralized online convex optimization (D-OCO), where multiple agents within a network collaboratively learn optimal decisions in real time, arises naturally in applications such as federated learning, sensor networks, and multi-agent control. In this paper, we study D-OCO under unknown, time- and agent-varying feedback delays. While recent work has addressed this problem (Nguyen et al., 2024), existing algorithms assume prior knowledge of the total delay over agents and still suffer from suboptimal dependence on both the delay and network parameters. To overcome these limitations, we propose a novel algorithm that achieves an improved regret bound of $O\big(N\sqrt{d_{\mathrm{tot}}} + N\sqrt{T}/(1-\sigma_2)^{1/4}\big)$, where $T$ is the total horizon, $d_{\mathrm{tot}}$ denotes the average total delay across agents, $N$ is the number of agents, and $1-\sigma_2$ is the spectral gap of the network. Our approach builds upon recent advances in D-OCO (Wan et al., 2024a), but crucially incorporates an adaptive learning rate mechanism via a decentralized communication protocol. This enables each agent to estimate delays locally using a gossip-based strategy without prior knowledge of the total delay. We further extend our framework to the strongly convex setting and derive a sharper regret bound of $O(N\delta_{\max}\ln T/\alpha)$, where $\alpha$ is the strong convexity parameter and $\delta_{\max}$ is the maximum number of missing observations averaged over agents. We also show that our upper bounds for both settings are tight up to logarithmic factors. Experimental results validate the effectiveness of our approach, showing improvements over existing benchmark algorithms.
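The gossip-based strategy mentioned above can be illustrated with a minimal sketch: each agent holds a local delay estimate and repeatedly averages it with its neighbors through a doubly stochastic mixing matrix, so all estimates converge to the network-wide average without any agent knowing the total delay. The ring topology, Metropolis-style weights, and function name below are illustrative assumptions, not the paper's actual protocol.

```python
import numpy as np

def gossip_average(d, W, num_rounds):
    # Each round, every agent replaces its value by a weighted average of
    # its neighbors' values: x <- W x, with W doubly stochastic.
    x = np.asarray(d, dtype=float)
    for _ in range(num_rounds):
        x = W @ x
    return x

# 4 agents on a ring with Metropolis-style doubly stochastic weights.
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])
delays = [3.0, 7.0, 1.0, 5.0]   # locally observed cumulative delays
est = gossip_average(delays, W, num_rounds=50)
# every agent's estimate approaches the network average, 4.0
```

The geometric convergence rate of this averaging is governed by the second-largest eigenvalue modulus of $W$, which is where the spectral-gap term $1-\sigma_2$ in the regret bound comes from.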


Architecture-Aware Generalization Bounds for Temporal Networks: Theory and Fair Comparison Methodology

Gahtan, Barak, Bronstein, Alex M.

arXiv.org Artificial Intelligence

Deep temporal architectures such as TCNs achieve strong predictive performance on sequential data, yet theoretical understanding of their generalization remains limited. We address this gap through three contributions: introducing an evaluation methodology for temporal models, revealing surprising empirical phenomena about temporal dependence, and developing the first architecture-aware theoretical framework for dependent sequences. Fair-Comparison Methodology. We introduce evaluation protocols that fix the effective sample size $N_{\text{eff}}$ to isolate temporal structure effects from information content. Empirical Findings. Applying this methodology reveals that under $N_{\text{eff}} = 2000$, strongly dependent sequences ($\rho = 0.8$) exhibit approximately $76\%$ smaller generalization gaps than weakly dependent ones ($\rho = 0.2$), challenging the conventional view that dependence universally impedes learning. However, observed convergence rates ($N_{\text{eff}}^{-1.21}$ to $N_{\text{eff}}^{-0.89}$) significantly exceed theoretical worst-case predictions ($N^{-0.5}$), revealing that temporal architectures exploit problem structure in ways current theory does not capture. Lastly, we develop the first architecture-aware generalization bounds for deep temporal models on exponentially $\beta$-mixing sequences. By embedding Golowich et al.'s i.i.d. class bound within a novel blocking scheme that partitions $N$ samples into approximately $B \approx N/\log N$ quasi-independent blocks, we establish polynomial sample complexity under convex Lipschitz losses. The framework achieves $\sqrt{D}$ depth scaling alongside the product of layer-wise norms $R = \prod_{\ell=1}^{D} M^{(\ell)}$, avoiding exponential dependence. While these bounds are conservative, they prove learnability and identify architectural scaling laws, providing worst-case baselines that highlight where future theory must improve.
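The blocking idea behind the bound can be sketched concretely: split the $N$ dependent samples into contiguous blocks of length about $\log N$, giving $B \approx N/\log N$ blocks that, under exponential $\beta$-mixing, behave almost independently. This is a toy sketch under that assumption, not the paper's exact construction.

```python
import math

def make_blocks(n):
    # Contiguous blocks of length ~ log(n); the number of blocks is then
    # roughly n / log(n), the quasi-independent units the bound counts.
    block_len = max(1, int(math.log(n)))
    return [list(range(s, min(s + block_len, n)))
            for s in range(0, n, block_len)]

blocks = make_blocks(2000)
# log(2000) ~ 7.6, so blocks of length 7 and 286 blocks in total
```

Applying an i.i.d. bound at the block level then trades a factor of roughly $\log N$ in effective sample size for freedom from the dependence structure within each block.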


Efficiently Attacking Memorization Scores

Do, Tue, Chandrasekaran, Varun, Alabi, Daniel

arXiv.org Artificial Intelligence

Influence estimation tools -- such as memorization scores -- are widely used to understand model behavior, attribute training data, and inform dataset curation. However, recent applications in data valuation and responsible machine learning raise the question: can these scores themselves be adversarially manipulated? In this work, we present a systematic study of the feasibility of attacking memorization-based influence estimators. We characterize attacks that produce highly memorized samples as highly sensitive queries in the regime where the trained algorithm is accurate. Our attack (calculating the pseudoinverse of the input) is practical, requiring only black-box access to model outputs and incurring only modest computational overhead. We empirically validate our attack across a wide suite of image classification tasks, showing that even state-of-the-art proxies are vulnerable to targeted score manipulations. In addition, we provide a theoretical analysis of the stability of memorization scores under adversarial perturbations, revealing conditions under which influence estimates are inherently fragile. Our findings highlight critical vulnerabilities in influence-based attribution and suggest the need for robust defenses. All code can be found at https://github.com/tuedo2/MemAttack
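A hedged sketch of the pseudoinverse ingredient, in the linear case: the column of $X^{+}$ associated with a training point is a direction along which the least-squares fit is maximally sensitive to that point, which is the intuition behind crafting highly sensitive queries. The synthetic data and variable names below are illustrative; the paper's attack targets learned models, not this toy regression.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))     # toy training inputs (50 points, 10 features)
y = rng.normal(size=50)           # toy training targets

X_pinv = np.linalg.pinv(X)        # Moore-Penrose pseudoinverse, shape (10, 50)
w = X_pinv @ y                    # minimum-norm least-squares fit
query = X_pinv[:, 0]              # direction tied to training point 0:
                                  # perturbing y[0] moves w along this vector
```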


HPE: Answering Complex Questions over Text by Hybrid Question Parsing and Execution

Liu, Ye, Yavuz, Semih, Meng, Rui, Radev, Dragomir, Xiong, Caiming, Zhou, Yingbo

arXiv.org Artificial Intelligence

The dominant paradigm of textual question answering systems is based on end-to-end neural networks, which excel at answering natural language questions but fall short on complex ones. This stands in contrast to the broad adoption of semantic parsing approaches over structured data sources (e.g., relational databases, knowledge graphs), which convert natural language questions to logical forms and execute them with query engines. Towards combining the strengths of neural and symbolic methods, we propose a framework of question parsing and execution on textual QA. It comprises two central pillars: (1) We parse the question of varying complexity into an intermediate representation, named H-expression, which is composed of simple questions as the primitives and symbolic operations representing the relationships among them; (2) To execute the resulting H-expressions, we design a hybrid executor, which integrates deterministic rules to translate the symbolic operations with a drop-in neural reader network to answer each decomposed simple question. Hence, the proposed framework can be viewed as a top-down question parsing followed by a bottom-up answer backtracking. The resulting H-expressions closely guide the execution process, offering higher precision and better interpretability while still preserving the advantages of neural readers for resolving the primitive elements. Our extensive experiments on MuSiQue, 2WikiQA, HotpotQA, and NQ show that the proposed parsing and hybrid execution framework outperforms existing approaches in supervised, few-shot, and zero-shot settings, while also effectively exposing its underlying reasoning process.
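The parse-then-execute idea can be sketched in a few lines: an H-expression nests simple questions under symbolic operations; a hybrid executor applies a deterministic rule for each operation and delegates primitive questions to a reader. The `JOIN` operation, the toy context dict, and the stub reader below are illustrative inventions, not the paper's actual grammar or model.

```python
def stub_reader(question, context):
    # Stand-in for a neural reader: answer by lookup in a toy context.
    return context.get(question, "unknown")

def execute(h_expr, context):
    if isinstance(h_expr, str):           # primitive simple question
        return stub_reader(h_expr, context)
    op = h_expr[0]
    if op == "JOIN":                      # chain: first answer fills a slot
        first = execute(h_expr[1], context)
        return execute(h_expr[2].replace("#1", first), context)
    raise ValueError(f"unknown operation {op!r}")

context = {
    "Who directed Inception?": "Christopher Nolan",
    "When was Christopher Nolan born?": "1970",
}
answer = execute(("JOIN", "Who directed Inception?", "When was #1 born?"),
                 context)
# top-down parse of the complex question, bottom-up answer backtracking
```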


Approximation analysis of CNNs from a feature extraction view

Li, Jianfei, Feng, Han, Zhou, Ding-Xuan

arXiv.org Artificial Intelligence

Deep learning based on deep neural networks has been very successful in many practical applications, but its network architectures and structures still lack adequate theoretical understanding. In this paper we establish some analysis for linear feature extraction by deep multi-channel convolutional neural networks (CNNs), which demonstrates the power of deep learning over traditional linear transformations, such as Fourier, wavelet, and redundant dictionary coding methods. Moreover, we give an exact construction presenting how linear feature extraction can be conducted efficiently with multi-channel CNNs. It can be applied to lower the essential dimension for approximating a high dimensional function. Rates of function approximation by such deep networks implemented with channels and followed by fully-connected layers are investigated as well. Harmonic analysis for factorizing linear features into multi-resolution convolutions plays an essential role in our work. In addition, a dedicated vectorization of matrices is constructed, which bridges 1D CNN and 2D CNN and allows us to have corresponding 2D analysis.


hvEEGNet: exploiting hierarchical VAEs on EEG data for neuroscience applications

Cisotto, Giulia, Zancanaro, Alberto, Zoppis, Italo F., Manzoni, Sara L.

arXiv.org Artificial Intelligence

With the recent success of artificial intelligence in neuroscience, a number of deep learning (DL) models have been proposed for classification, anomaly detection, and pattern recognition tasks in electroencephalography (EEG). EEG is a multi-channel time-series that provides information about the individual brain activity for diagnostics, neuro-rehabilitation, and other applications (including emotion recognition). Two main issues challenge the existing DL-based modeling methods for EEG: the high variability between subjects and the low signal-to-noise ratio, which makes it difficult to ensure good quality in the EEG data. In this paper, we propose two variational autoencoder models, namely vEEGNet-ver3 and hvEEGNet, to target the problem of high-fidelity EEG reconstruction. We properly designed their architectures using the blocks of the well-known EEGNet as the encoder, and proposed a loss function based on dynamic time warping. We tested the models on the public Dataset 2a - BCI Competition IV, where EEG was collected from 9 subjects and 22 channels. hvEEGNet was found to reconstruct the EEG data with very high fidelity, outperforming most previous solutions (including our vEEGNet-ver3), and this was consistent across all subjects. Interestingly, hvEEGNet made it possible to discover that this popular dataset includes a number of corrupted EEG recordings that might have influenced previous literature results. We also investigated the training behaviour of our models and related it with the quality and the size of the input EEG dataset, aiming at opening a new research debate on this relationship. In the future, hvEEGNet could be used as an anomaly (e.g., artefact) detector in large EEG datasets to support the domain experts, but also the latent representations it provides could be used in other classification problems and EEG data generation.
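The dynamic-time-warping ingredient of the loss can be illustrated with the classic DTW recursion between two 1-D series; this is only a minimal sketch of the alignment-based discrepancy such a loss builds on, while the paper's actual loss operates on multi-channel EEG reconstructions.

```python
import numpy as np

def dtw(a, b):
    # Dynamic programming table: D[i, j] is the minimal cumulative cost of
    # aligning a[:i] with b[:j], allowing stretches and compressions in time.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

d = dtw([0.0, 1.0, 2.0], [0.0, 1.0, 1.0, 2.0])
# the repeated sample is absorbed by the warping, so the distance is 0.0
```

Unlike a pointwise MSE, this discrepancy tolerates small temporal misalignments between the original and reconstructed signals.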


Submeter-level Land Cover Mapping of Japan

Yokoya, Naoto, Xia, Junshi, Broni-Bediako, Clifford

arXiv.org Artificial Intelligence

Deep learning has shown promising performance in submeter-level mapping tasks; however, the annotation cost of submeter-level imagery remains a challenge, especially when applied on a large scale. In this paper, we present the first submeter-level land cover mapping of Japan with eight classes, at a relatively low annotation cost. We introduce a human-in-the-loop deep learning framework leveraging OpenEarthMap, a recently introduced benchmark dataset for global submeter-level land cover mapping, with a U-Net model that achieves national-scale mapping with a small amount of additional labeled data. By adding a small amount of labeled data for areas or regions where a U-Net model trained on OpenEarthMap clearly failed and retraining the model, an overall accuracy of 80\% was achieved, a nearly 16-percentage-point improvement. Using aerial imagery provided by the Geospatial Information Authority of Japan, we create land cover classification maps of eight classes for the entire country of Japan. Our framework, with its low annotation cost and high-accuracy mapping results, demonstrates the potential to contribute to the automatic updating of national-scale land cover mapping using submeter-level optical remote sensing data. The mapping results will be made publicly available.


DPATD: Dual-Phase Audio Transformer for Denoising

Li, Junhui, Wang, Pu, Li, Jialu, Wang, Xinzhe, Zhang, Youshan

arXiv.org Artificial Intelligence

Recent high-performance transformer-based speech enhancement models demonstrate that time-domain methods can achieve performance similar to time-frequency domain methods. However, time-domain speech enhancement systems typically receive input audio sequences consisting of a large number of time steps, making it challenging to model extremely long sequences and train models to perform adequately. In this paper, we utilize smaller audio chunks as input to achieve efficient utilization of audio information and address the above challenges. We propose a dual-phase audio transformer for denoising (DPATD), a novel model that organizes transformer layers in a deep structure to learn clean audio sequences for denoising. DPATD splits the audio input into smaller chunks, where the chunk length can be proportional to the square root of the original sequence length. Our memory-compressed explainable attention is efficient and converges faster compared to the frequently used self-attention module. Extensive experiments demonstrate that our model outperforms state-of-the-art methods.
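The chunking step can be sketched directly: split a length-$L$ sequence into chunks of size about $\sqrt{L}$, so attention inside each chunk runs over far shorter sequences than the raw waveform. The function name and sample rate below are illustrative assumptions, not taken from the paper.

```python
import math

def split_into_chunks(signal):
    # Chunk length ~ sqrt(L): quadratic-cost attention over each chunk then
    # scales linearly in L overall instead of quadratically.
    chunk = max(1, math.isqrt(len(signal)))
    return [signal[i:i + chunk] for i in range(0, len(signal), chunk)]

x = list(range(16000))            # e.g. one second of audio at 16 kHz
chunks = split_into_chunks(x)
# chunks of length 126 (sqrt of 16000, rounded down); the last one is shorter
```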


The noise level in linear regression with dependent data

Ziemann, Ingvar, Tu, Stephen, Pappas, George J., Matni, Nikolai

arXiv.org Machine Learning

Ordinary least squares (OLS) regression from a finite sample is one of the most ubiquitous and widely used techniques in machine learning. When faced with independent data, there are now sharp tools available to analyze its success optimally under relatively general assumptions. Indeed, a non-asymptotic theory matching the classical asymptotically optimal understanding from statistics [van der Vaart, 2000] has been developed over the last decade [Hsu et al., 2012, Oliveira, 2016, Mourtada, 2022]. However, once we relax the independence assumption and move toward data that exhibits correlations, the situation is much less well-understood--even for a problem as seemingly simple as linear regression. While sharp asymptotics are available through various limit theorems, there are no general results matching these in the finite sample regime. In this paper, we study the instance-specific performance of ordinary least squares in a setting with dependent data--and in contrast to much contemporary work on the theme--without imposing realizability.
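For concreteness, the estimator under study is the familiar least-squares fit; the sketch below computes it on synthetic independent data (the easy regime the abstract contrasts with), with the data-generating parameters chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 3
X = rng.normal(size=(n, d))                 # design matrix
w_true = np.array([1.0, -2.0, 0.5])         # ground-truth coefficients
y = X @ w_true + 0.1 * rng.normal(size=n)   # targets with small noise

# OLS: minimize ||X w - y||^2; lstsq solves the normal equations stably.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
# with independent rows, w_hat concentrates around w_true at rate ~ 1/sqrt(n)
```

With dependent rows the same formula applies, but the finite-sample concentration argument above is exactly what becomes delicate.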


Neural Likelihood Approximation for Integer Valued Time Series Data

O'Loughlin, Luke, Maclean, John, Black, Andrew

arXiv.org Machine Learning

Stochastic processes defined on integer valued state spaces are popular within the physical and biological sciences. These models are necessary for capturing the dynamics of small systems where the individual nature of the populations cannot be ignored and stochastic effects are important. The inference of the parameters of such models, from time series data, is difficult due to intractability of the likelihood; current methods, based on simulations of the underlying model, can be so computationally expensive as to be prohibitive. A combination of factors such as non-linear dynamics, partial observation and a complicated latent structure makes the inference of parameter posteriors a challenging problem. The likelihood is generally intractable and although methods that can sample from the exact posteriors exist--many based on simulation of the model itself--these can become computationally prohibitive in many situations (Doucet et al., 2015; Sherlock et al., 2015). Observations of a system with low noise are particularly challenging for simulation